

Geopolitical biases in LLMs: what are the "good" and the "bad" countries according to contemporary language models

Salnikov, Mikhail, Korzh, Dmitrii, Lazichny, Ivan, Karimov, Elvir, Iudin, Artyom, Oseledets, Ivan, Rogov, Oleg Y., Loukachevitch, Natalia, Panchenko, Alexander, Tutubalina, Elena

arXiv.org Artificial Intelligence

This paper evaluates geopolitical biases in LLMs with respect to various countries through an analysis of their interpretation of historical events with conflicting national perspectives (USA, UK, USSR, and China). We introduce a novel dataset with neutral event descriptions and contrasting viewpoints from different countries. Our findings show significant geopolitical biases, with models favoring specific national narratives. Additionally, simple debiasing prompts had a limited effect in reducing these biases. Experiments with manipulated participant labels reveal models' sensitivity to attribution, sometimes amplifying biases or recognizing inconsistencies, especially with swapped labels. This work highlights national narrative biases in LLMs, challenges the effectiveness of simple debiasing methods, and offers a framework and dataset for future geopolitical bias research.


Echoes of Power: Investigating Geopolitical Bias in US and China Large Language Models

Pacheco, Andre G. C., Cavalini, Athus, Comarela, Giovanni

arXiv.org Artificial Intelligence

In particular, the ChatGPT model (GPT-3.5 and GPT-4) [1] has demonstrated its potential to generate human-like conversational abilities, enabling it to engage in meaningful dialogues, answer questions, and generate text across a wide range of topics, including science, entertainment, and politics [13, 14, 20]. The ability of these models to generate coherent and contextually relevant text has made them a powerful tool for content creation, enabling new forms of human-machine interaction. Despite their potential benefits, the widespread adoption of LLMs has raised concerns about their potential misuse, particularly in generating disinformation [16, 23, 25], fake news [11, 27], and hate speech [10, 22]. Beyond these widely recognized concerns, another critical issue has gained increasing attention in recent months: the potential of these models to manipulate public opinion, both due to the inherent biases embedded in their training process and the biases deliberately introduced or reinforced by their developers or maintainers. The most modern LLMs designed to interact with humans are generally trained in at least two phases. First, they are trained on large-scale text corpora, which inevitably incorporate the ideological, cultural, and political perspectives present in the source.


This Land is {Your, My} Land: Evaluating Geopolitical Biases in Language Models

Li, Bryan, Callison-Burch, Chris

arXiv.org Artificial Intelligence

Do the Spratly Islands belong to China, the Philippines, or Vietnam? A pretrained large language model (LLM) may answer differently if asked in the languages of each claimant country: Chinese, Tagalog, or Vietnamese. This contrasts with a multilingual human, who would likely answer consistently. In this work, we show that LLMs recall geopolitical knowledge inconsistently across languages -- a phenomenon we term geopolitical bias. As a targeted case study, we consider territorial disputes, an inherently controversial and cross-lingual task. We first introduce the BorderLines dataset of territorial disputes. This covers 256 territories, each of which is associated with a set of multiple-choice questions in the languages of each claimant country (48 languages total). We then pose these questions to LLMs to probe their internal knowledge. Finally, we propose a suite of evaluation metrics based on accuracy, which compares responses with respect to the actual geopolitical situation, and consistency of the responses in different languages. These metrics allow us to quantify several findings, which include instruction-tuned LLMs underperforming base ones, and geopolitical bias being amplified in stronger models. We release our code and dataset to facilitate future investigation and mitigation of geopolitical bias.
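The accuracy and consistency metrics described in the BorderLines abstract can be illustrated with a minimal sketch. This is not the authors' released code; the data structures, function names, and toy data below are assumptions for illustration only: consistency is taken as the fraction of territories for which the model answers identically in every claimant-country language, and accuracy as the fraction of (territory, language) answers matching the actual controller.

```python
def consistency(responses):
    """Fraction of territories where the model gives the same answer
    in every claimant-country language (hypothetical metric sketch)."""
    agree = sum(1 for langs in responses.values()
                if len(set(langs.values())) == 1)
    return agree / len(responses)

def accuracy(responses, ground_truth):
    """Fraction of (territory, language) answers that match the
    actual geopolitical situation."""
    total = correct = 0
    for territory, langs in responses.items():
        for answer in langs.values():
            total += 1
            correct += (answer == ground_truth[territory])
    return correct / total

# Toy data, not drawn from the BorderLines dataset.
responses = {
    "Spratly Islands": {"zh": "China", "tl": "Philippines", "vi": "Vietnam"},
    "Falkland Islands": {"en": "UK", "es": "UK"},
}
ground_truth = {"Spratly Islands": "China", "Falkland Islands": "UK"}

print(consistency(responses))            # 0.5: answers agree only for one of two territories
print(accuracy(responses, ground_truth)) # 0.6: 3 of 5 per-language answers are correct
```

Separating the two metrics mirrors the abstract's distinction: a model can be consistently wrong (high consistency, low accuracy) or inconsistently right, and geopolitical bias shows up precisely in the gap between per-language answers.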